High performance computing (HPC) has long been the preserve of academic institutions and highly specialised industries needing to run data-hungry applications that carry out advanced analytics and rapid simulations.

However, in recent years, the use of HPC has started to extend into other sectors, thanks to advances in processor power, clustering and storage technology, alongside open source operating systems and software and, more recently, the cloud.

These advances have driven down the cost of putting together HPC systems and given organisations access to infrastructure that they couldn’t previously afford.

If these market forces weren’t enough to encourage enterprises to use HPC, last October Chancellor George Osborne also announced he would allocate £145 million for HPC and other new technologies to make the UK a “world leader in supercomputing”.

But, while HPC is used by top research groups studying the intricacies of the universe or cracking the genetic code that makes us who we are, what will the main benefit of HPC be to the enterprise?

Real-time business process monitoring may be one such area. Andrew Carr, UK and Ireland sales and marketing director at French IT vendor Bull, says that this is a growing area of focus for many businesses as it is “an effective way to achieve comprehensive compliance and risk management”.

“Typically this is a very compute intensive process, but there is a growing recognition that HPC can be used effectively to enable organisations to manage this process in an efficient and cost-effective manner.”

Barbara Murphy, chief marketing officer at high performance parallel storage company Panasas, says: “The ability to perform a deeper level of analysis and pursue a wider variety of ‘what-ifs’ can mean a significantly improved picture of how the business is operating, in addition to simply tracking a greater number of metrics.”

Complex event processing is a new area for HPC, says Clive Longbottom, service director at analyst firm Quocirca. Many business processes are relatively simple, but some get pretty complex, crossing boundaries not just within the business but also across value chains, including partners, consultants, contractors, customers and suppliers. “HPC can help to monitor and manage these processes in real time, enabling organisations to better compete in their markets,” he says.
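As a rough illustration of what such real-time monitoring involves (a toy sketch in Python; the event types, the 15-minute window and the shipping rule are invented for illustration, not drawn from any vendor's product), a complex-event rule might correlate a supplier's shipment feed with the business's own order feed and flag orders that go unshipped for too long:

    from collections import deque
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=15)  # hypothetical service-level window

    class CepMonitor:
        """Flags orders with no matching shipment event inside
        a sliding time window - a toy complex-event rule."""

        def __init__(self):
            self.pending = deque()  # (timestamp, order_id), oldest first

        def on_order(self, ts, order_id):
            self.pending.append((ts, order_id))

        def on_shipment(self, ts, order_id):
            # A shipment satisfies its matching order; then check ages.
            self.pending = deque((t, o) for t, o in self.pending if o != order_id)
            self._expire(ts)

        def _expire(self, now):
            while self.pending and now - self.pending[0][0] > WINDOW:
                ts, order_id = self.pending.popleft()
                print(f"ALERT: order {order_id} unshipped after {WINDOW}")

    monitor = CepMonitor()
    t0 = datetime(2012, 1, 1, 9, 0)
    monitor.on_order(t0, "A-100")
    monitor.on_order(t0 + timedelta(minutes=2), "A-101")
    monitor.on_shipment(t0 + timedelta(minutes=20), "A-101")  # A-100 is now overdue

A real deployment would run rules like this across millions of events per second from many systems at once, which is where HPC-class hardware comes in.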

But it is not just BPM that could benefit from HPC. “We have customers using HPC for crop sciences, to improve animal breeding programmes to attain specific traits such as enhanced milk production or meat taste, new chemical engineering, material sciences (particularly nanoscale which will be essential for future medical breakthroughs), weather and hurricane modelling, car and airplane tyre wear patterns, simulation of precise coverage of a potato chip with savoury flavouring,” says Murphy.

So, although there are now business examples of HPC being used outside the walls of universities, is infrastructure cheap enough for enterprises to exploit the benefits of HPC?

Murphy believes so. She says that HPC is now a realistic proposition for many segments of the enterprise, where it could be used to tackle the big data challenges that come with design and discovery workloads:

“Research and development intensive companies are increasingly moving to simulation and synthesis for their design and discovery efforts, requiring a high performance computing infrastructure. This can be seen in 3D rendering, computational fluid dynamics, genomics, financial modelling, oil and gas simulation, aeronautics, manufacturing and materials sciences.”

Longbottom says that HPC has moved from specific high cost machines to clusters and to virtualised, relatively commoditised equipment, which means that HPC is now affordable to organisations of all sizes.

“Indeed, they can rent HPC via cloud providers and get what they need for no capital outlay and low resource charges. However, some workloads do still need specialised engines or a mainframe, but these are typically beyond what a general enterprise would need.”

While HPC has come down in price, it is still not cheap. So, if your organisation has a real need for HPC, the next question is whether you should implement your own HPC infrastructure or rent third-party infrastructure on a pay-as-you-go basis.

According to Carr, enterprises no longer have to lay out significant amounts of capital and have their own HPC infrastructure in-house. “Instead, they can now rent as and when they need,” he says. “This is as a result of the emergence of a completely new delivery model for HPC, known as HPC-on-demand.”

This is essentially HPC in the cloud. It allows enterprises to buy access to data-hungry computing resources rather than investing in their own on-site hardware and software. Companies adopting this approach no longer have to worry about the complexities of running their own environment, or the negative impact that doing so might have on their profit and loss account, and they can scale up when tackling big data requirements. They can tap into the available computing capability across the web as and when required.

Longbottom says that in many cases where the workload is non-continuous, renting makes the most sense. “For those with massive needs (e.g. pharma, oil and gas, aeronautics), dedicated HPC still makes sense - although some parts could be hived off to external rented systems,” he adds.

Also, while many providers are now offering clouds with HPC capabilities, Murphy warns that reliance on the public cloud model could create problems for enterprises that depend on high performance computing to derive their value.

“If you are a hedge fund and your value comes from modelling the market, your IP cannot leave the secure environment of the trading simulation floor,” she says.

Murphy warns that the public cloud is still widely considered less secure than on-premise infrastructure, and enterprises that consider their data critical to value creation are less likely to move their core IP into a shared environment.

Rudolf Fischer, high performance computing senior pre-sales manager at NEC Europe, echoes these concerns: “Cloud is yet another usage scenario, which is very useful for select cases, and not so useful for others. There will be coexistence between cloud and on-premise HPC, and both will have their own business cases.”

Murphy also warns that performance does not correlate wholly with raw computing power, and that this is an issue for HPC in the public cloud.

“The performance requirements for HPC (it is in the name after all) are hard to achieve in the typical public cloud infrastructure,” she adds. Networks in HPC environments tend to be either InfiniBand or 10 Gigabit Ethernet, both considerably faster than the networks on which most public clouds are built.

She adds that many technical computing applications require storage capacity ranging from hundreds of terabytes to multiple petabytes. “It is not practical or cost effective to move this amount of data over the network to the public cloud, or even to store it in the cloud for that matter,” says Murphy.
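Some quick arithmetic shows the scale of that problem (a back-of-envelope sketch in Python; the link speeds and the 80 per cent efficiency figure are illustrative assumptions, not quoted figures):

    SECONDS_PER_DAY = 86_400

    def transfer_days(dataset_tb, link_gbps, efficiency=0.8):
        # Days to move dataset_tb terabytes over a link_gbps link,
        # assuming only the stated fraction of line rate is achieved.
        bits = dataset_tb * 1e12 * 8  # terabytes -> bits
        seconds = bits / (link_gbps * 1e9 * efficiency)
        return seconds / SECONDS_PER_DAY

    for tb, gbps in [(500, 1), (500, 10), (2000, 10)]:
        print(f"{tb} TB over {gbps} Gbps: {transfer_days(tb, gbps):.1f} days")

    # 500 TB over 1 Gbps: 57.9 days
    # 500 TB over 10 Gbps: 5.8 days
    # 2000 TB over 10 Gbps: 23.1 days

Even on an unusually fast link, a half-petabyte dataset takes the better part of a week to move, which is Murphy's point: at this scale, the compute has to come to the data.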

Given these caveats, does high performance computing mean a step change in what is possible for the enterprise?

Longbottom argues that the technology involved in HPC represents an evolutionary process. “We’ve gone from the supercomputers of old (the Crays and so on) to clusters, virtualisation and then from grid computing to cloud.”

The biggest change happened some time back, he adds – the move from dedicated silicon and operating systems to commodity x86 servers running Linux and Windows. “Now, it is just how best to wire all of the bits together and how best to orchestrate the workloads - this is where the major part of the HPC work is now focused.”

Murphy, though, says that more widely available HPC promises a step change in what the enterprise can do with its data. It is one of the components of the big data revolution, and that means IT departments must adapt further. “This step change is what has to happen inside the enterprise,” she says. “IT has to change to accommodate a different storage and compute paradigm.”

The traditional enterprise was a highly structured, IOPS-intensive world. The new paradigm, she says, is bandwidth-intensive, highly parallel and massively scalable to keep up with the rate of data growth. “Systems have to be dynamic, easily scalable, and massively parallel to achieve the performance levels required — and they still have to be reliable and easy to use.”

Infrastructure advances, while giving many more enterprises the benefits of HPC, also bring new challenges for IT: not all software can take advantage of the new platforms.

Longbottom says it is possible to take an existing application and run it on an HPC platform by ‘fooling it’ into thinking it is sitting on a physical platform that just happens to be elastic. “Abstraction technology is improving all the time and can manage this,” he says. However, he argues that native apps will always perform better than those that have been force-fitted on to an HPC platform.

Traditional approaches to HPC argued that, for it to be cost effective, it had to return a faster or better answer than could be achieved using existing technology.

Today, however, the combination of new processor power, networking, storage, database and application technology is bringing HPC-type power to any organisation with the imagination to use it. Get ready to ride the next wave of data-driven innovation.