Cloud computing is becoming a mainstream technology. However, cloud providers are beginning to feel price pressure and must reduce their costs to remain competitive.

CIOs need to be aware that under downward price pressure, the excess capacity cloud providers keep available for peak traffic drags providers' profits down, which may alter the relationship and the service the CIO receives.

A behemoth cloud provider like Amazon can afford to carry excess capacity; for smaller providers, it is a challenge that must be solved to compete and survive.

Larger cloud providers have already started to differentiate themselves through new and alternative services. It’s how they compete with Amazon and also how they entice CIOs at large enterprises to commit to cloud adoption.

As larger enterprises sign up for cloud services, their CIOs expect detailed SLAs, security, disaster recovery capabilities and management tools as part of the offering. This focus on services beyond basic capacity is starting to drive up costs, just as customisation has done for enterprise software.

But excess capacity need not be idle capacity. If non-time-sensitive services can use it, then something that previously dragged down a service provider's profit margins becomes a revenue driver. With CIOs demanding services like security, disaster recovery and management, cloud providers conveniently have a service roadmap laid out ahead of them. But if existing cloud providers are slow to act, start-ups will rush in to seize this opportunity.

Unified communications (UC), another key strategic technology being adopted by CIOs, is now moving to a cloud model. CIOs may want to consider cloud operators that offer UC as a way of making the most of their excess capacity.

"UC has traditionally been deployed either as a static cloud service with few features and no customisation or in a highly customisable, chassis-based environment on the company's premise," says Jon Brinton, president of cloud provider Mitel NetSolutions.

With this excess cloud capacity available, Mitel has been able to create a UC cloud service by offering not just the UC software, but a virtual private data centre on which to run and manage it. The service includes computing, memory, storage and connectivity.

"This new kind of cloud-based service, which wouldn't be possible without the excess of cloud capacity we enjoy today, allows customers to rapidly deploy UC," says Brinton.

Some of these advances actually hark back to the past. Before cloud became all the rage, a number of "grid computing" companies raked in serious VC funding. Probably the most famous grid project is SETI@home, a UC Berkeley-hosted project that uses idle computing capacity to help search for extraterrestrial intelligence (SETI stands for Search for Extraterrestrial Intelligence).

SETI uses radio telescopes to listen for narrow-bandwidth radio signals from space. Such signals are not known to occur naturally, so a detection would provide evidence of extraterrestrial technology. The search was originally powered by supercomputers, but researcher David Gedye proposed creating a virtual supercomputer instead. SETI@home offers volunteers screen saver software; when the screen saver kicks in, signalling that a PC is idle, that idle capacity is used to analyse data from the project's radio telescopes.
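The mechanics are simple enough to sketch. The Python below is a minimal, illustrative version of that volunteer-computing loop; the function names, idle check and "analysis" are hypothetical stand-ins, not the actual SETI@home client code.

```python
import random

WORK_UNITS = 3  # process a few data chunks while the machine stays "idle"


def machine_is_idle() -> bool:
    # Stand-in: a real client queries the OS for recent keyboard/mouse input.
    return True


def fetch_work_unit() -> list[float]:
    # Stand-in: a real client downloads a chunk of recorded radio-telescope
    # data from the project's servers; here we fabricate a noise spectrum.
    return [random.gauss(0.0, 1.0) for _ in range(1024)]


def analyse(work_unit: list[float]) -> float:
    # Toy analysis: the strongest single sample stands in for a narrow-band
    # signal rising above the noise floor.
    return max(abs(sample) for sample in work_unit)


def report_result(score: float) -> None:
    # Stand-in: a real client uploads results for server-side validation.
    print(f"work unit done, peak score = {score:.2f}")


for _ in range(WORK_UNITS):
    if not machine_is_idle():
        break  # the owner is back; give the machine up immediately
    report_result(analyse(fetch_work_unit()))
```

The essential idea survives unchanged in today's cloud context: capacity that would otherwise sit idle is handed low-priority, interruptible work.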

Lately, most grid projects have been expensive and confined to compute-intensive sectors such as the pharmaceutical industry.

Now grid's cousin, cloud computing, is poised to upend the grid model too, and here again start-ups are pioneering new approaches.

Start-up Cycle Computing was founded to offer cloud-based utility supercomputing services. Recently, the company spun up a 50,000-core virtual supercomputer using Amazon Web Services. The cluster was used to help develop drug compounds for cancer research. The customer, computational chemistry company Schrödinger, used the cloud-based virtual supercomputer to analyse 21 million drug compounds in three hours at a cost of just over £3,000.

Using traditional computing models, a drug discovery company would have to spend $12 million to $18 million on infrastructure alone, and even with that in place, the drug analysis process would take hundreds of hours to complete, not three.
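A back-of-the-envelope calculation shows why the numbers work: virtual screening is embarrassingly parallel, so 21 million compounds spread across 50,000 cores comes to just 420 compounds per core, or roughly 140 per core-hour. The Python sketch below illustrates the pattern on a single machine; the scoring function is a hypothetical stand-in for the real computational chemistry.

```python
from multiprocessing import Pool


def score_compound(compound_id: int) -> float:
    # Hypothetical stand-in for a docking/scoring calculation on one molecule.
    return float(compound_id % 97) / 97.0


if __name__ == "__main__":
    # The headline numbers from the Cycle Computing/Schrödinger run.
    compounds, cores, hours = 21_000_000, 50_000, 3
    per_core = compounds / cores   # 420 compounds per core
    rate = per_core / hours        # ~140 compounds per core-hour
    print(f"{per_core:.0f} compounds per core, ~{rate:.0f} per core-hour")

    # A handful of local processes stand in for the 50,000-core cluster;
    # because each compound is scored independently, the work divides evenly.
    with Pool(processes=4) as pool:
        scores = pool.map(score_compound, range(1_000))
    print(f"screened {len(scores)} compounds")
```

Since no core ever waits on another, adding cores shortens the run almost linearly, which is precisely what makes rented, short-lived excess capacity such a good fit for this workload.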

"Having access to a virtual supercomputer frees researchers up," says Jason Stowe, CEO of Cycle Computing. "In the past, companies would have to scale down their research questions to match their infrastructure. You couldn't think too big because your internal 1,500-core cluster could only handle so much."

Stowe believes that excess capacity represents an opportunity for more than just cloud providers. As businesses increasingly adopt virtual desktop infrastructure, powerful servers will run at near capacity from 9 to 5 and then sit completely idle until the next morning.

Perhaps, like people with solar panels on their roofs, in the near future businesses will be able to sell capacity back to service providers. Drug discovery, financial risk management and engineering projects could all benefit from cheap, large-scale virtual supercomputing.